Speaker Detail

Harshad Khadilkar

Principal Research Scientist

Introducing Harshad Khadilkar! A distinguished data science expert, Harshad is committed to applying innovative solutions to real-world problems. As a Visiting Associate Professor at IIT Bombay, he brings academic rigor and practical experience to his work. With a diverse portfolio spanning air and rail transport, energy, port operations, and supply chain management, Harshad is adept at leveraging networked control, operations research, and machine learning techniques. Currently, he's pioneering the application of these tools in the finance domain. Join us at the DataHack Summit to glean insights from Harshad's expertise and uncover the potential of data-driven solutions!

Over the past two years, Generative AI has been thrown at every problem, from its traditional forte of text and image generation to planning problems and time series forecasting. Keeping in mind that transformer- and diffusion-based models are both probabilistic inference engines, the question on every practitioner's mind is: How can I leverage GenAI without leaving myself at risk if (when?) things go wrong? This talk will focus on the responsible use of GenAI techniques in decision-making systems. We will cover the strengths and weaknesses of generative approaches in quantitative domains and discuss ways their vast pre-training can be leveraged while retaining the robustness of automated systems. To ground the discussion, we will examine how (and, more importantly, whether) LLMs can be used for time series forecasting and logistics planning, and where diffusion models are a better bet than transformer-based architectures.
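To make the "robustness" idea concrete, below is a minimal illustrative sketch (in Python, not taken from the talk) of one way a generative forecaster could be used responsibly: its output is accepted only when it stays within a plausibility band around a classical seasonal-naive baseline, and the system otherwise falls back to that baseline. The generative_forecast callable is a hypothetical stand-in for any LLM- or diffusion-based forecasting model, and the tolerance and baseline are assumptions chosen for illustration.

import numpy as np

def seasonal_naive(history, horizon, season=7):
    # Classical baseline: repeat the last observed season of the series.
    last_season = np.asarray(history, dtype=float)[-season:]
    reps = int(np.ceil(horizon / season))
    return np.tile(last_season, reps)[:horizon]

def guarded_forecast(history, horizon, generative_forecast, tol=3.0):
    # Accept the generative forecast only if it is finite, well-shaped,
    # and stays within a volatility-scaled band around the baseline;
    # otherwise fall back to the robust classical forecast.
    history = np.asarray(history, dtype=float)
    baseline = seasonal_naive(history, horizon)
    candidate = np.asarray(generative_forecast(history, horizon), dtype=float)

    scale = max(float(np.std(np.diff(history))), 1e-8)
    band = tol * scale * np.sqrt(np.arange(1, horizon + 1))  # band widens with lead time

    ok_shape = candidate.shape == baseline.shape
    ok_values = ok_shape and bool(np.all(np.isfinite(candidate)))
    ok_band = ok_values and bool(np.all(np.abs(candidate - baseline) <= band))

    if ok_band:
        return candidate, "generative"
    return baseline, "baseline_fallback"

# Example with a dummy stand-in for the generative model:
history = np.sin(np.arange(60) * 2 * np.pi / 7) + np.arange(60) * 0.01
forecast, source = guarded_forecast(
    history, horizon=14,
    generative_forecast=lambda h, n: seasonal_naive(h, n) + 0.05,
)
print(source, forecast[:3])

The specific check is a design choice; the point is that the pre-trained model's flexibility is exploited only where it agrees with a robust reference, which is the pattern the talk argues for in decision-making systems.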

Managing and scaling ML workloads has never been more challenging. Data scientists want to collaborate while building, training, and iterating on thousands of AI experiments. On the flip side, ML engineers are looking for distributed training, artifact management, and automated deployment for high performance.
